Sentiments and Topics in South African SONA Speeches
~ STA5073Z Data Science for Industry Assignment 2
Abstract
Introduction
The field of Natural Language Processing (NLP) encompasses techniques tailored for theme tracking and opinion mining, which together form a core part of text analysis. Of particular prominence is the extraction of latent thematic patterns and the quantification of the emotionality expressed in political texts.
Given such a political context, it is of specific interest to analyse the annual State of the Nation Address (SONA) speeches delivered by six different South African presidents (F.W. de Klerk, N.R. Mandela, T.M. Mbeki, K.P. Motlanthe, J.G. Zuma, and M.C. Ramaphosa) over a span of twenty-nine years (1994 to 2023). This analysis, descriptive and data-driven in nature, endeavours to examine the content of the SONA speeches in terms of themes via topic modelling (TM) and emotions via sentiment analysis (SentA). In general, as illustrated in Figure 1, the exploration proceeds along two axes, executing the aforementioned techniques at a macro and micro level of both the text (all presidents' versus each president's SONA speeches, respectively) and the token (sentences versus words, respectively).
Through such a multi-layered lens, any trends in topics and sentiments over time become identifiable at both a large scale (the presidents as a collective) and a small scale (each president as an individual). This provides not only an aggregated perspective on the general political discourse prevailing within South Africa, but also a more focused view of the specific rhetoric employed by each of the country's serving presidents during different periods.
To achieve the above-mentioned analysis, it is first relevant to revise foundational terms and review related literature at the intersection of politics and NLP. All pertinent pre-processing of the political text data is then considered, followed by a discussion delving into the details of each SentA and TM approach applied as part of the analysis. Specifically, three different lexicons are leveraged to describe sentiments, whilst five different topic models are applied to uncover themes within the South African presidents' SONA speeches. Following the implementation of these methodologies, the results are detailed in terms of insights and interpretations. Thereafter, an overall evaluation of the techniques in terms of efficacy and inadequacy is provided. Finally, focal findings are highlighted and potential improvements as part of future research are recommended.
Methods
Topic modelling
Latent Semantic Analysis (LSA)
LSA (Deerwester et al. 1990) is a non-probabilistic, non-generative model in which a form of matrix factorization is used to uncover a small number of latent topics that capture meaningful relationships among documents and tokens. As depicted in Figure, in the first step a document-term matrix (DTM) is generated from the raw text data by tokenizing the d documents into w words (or sentences), which form the columns and rows respectively. Each entry is weighted via either the bag-of-words (BoW) or tf-idf approach. The DTM, which is often sparse and high-dimensional, is then decomposed via a dimensionality-reduction technique, namely truncated Singular Value Decomposition (SVD). Consequently, in the second step, the DTM becomes the product of three matrices: the topic-word matrix \(A_{t^*}\) (for the tokens), the topic-prevalence matrix \(B_{t^*}\) (for the latent semantic factors), and the transposed document-topic matrix \(C^{T}_{t^*}\) (for the documents). Here \(t^*\), the optimal number of topics, is a hyperparameter tuned (via either the Silhouette coefficient or a coherence measure) to a value that retains the most significant dimensions in the transformed space. In the final step, the text data is encoded using this optimal number of topics.
Since LSA only requires a DTM, its implementation is generally efficient. However, the truncated SVD step can be computationally intensive and does not admit quick updates when new text data arrives. Additional LSA drawbacks include: limited interpretability, the underlying linear-model framework (which yields poor performance on text data with non-linear dependencies), and the implicit Gaussian assumption on tokens in documents (which may not be an appropriate distribution for text).
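To make the pipeline concrete, the following is a minimal sketch of the LSA steps using gensim's LsiModel; the toy token lists, variable names (e.g. speech_tokens), and the choice of five topics are illustrative assumptions rather than the exact configuration used for the SONA corpus.

# A minimal LSA sketch with gensim; toy data and t* = 5 are assumptions.
from gensim import corpora, models

# Toy stand-in for the pre-processed SONA speeches: one token list per document.
speech_tokens = [
    ["government", "year", "development", "people"],
    ["energy", "programme", "national", "year"],
    ["alliance", "transitional", "constitution", "election"],
]

dictionary = corpora.Dictionary(speech_tokens)                   # token <-> id mapping
bow_corpus = [dictionary.doc2bow(doc) for doc in speech_tokens]  # BoW counts (the DTM)
tfidf = models.TfidfModel(bow_corpus)                            # re-weight entries via tf-idf
tfidf_corpus = tfidf[bow_corpus]

# Truncated SVD of the (tf-idf weighted) DTM, keeping t* = 5 latent topics.
lsa = models.LsiModel(tfidf_corpus, id2word=dictionary, num_topics=5)
for topic_id, topic in lsa.show_topics(num_topics=5, num_words=10):
    print(topic_id, topic)

The printed per-topic strings take the same signed-weight form as the LSA output reported in the Results section below.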
Probabilistic Latent Semantic Analysis (pLSA)
Instead of implementing truncated SVD, pLSA (Hofmann 1999) utilizes a generative, probabilistic model. Within this framework, a document d is first selected with probability P(d). A latent topic t is then chosen for this document with probability P(t|d). Finally, given this chosen topic t, a word w (or sentence) is generated with probability P(w|t), as shown in Figure. Note that the value of P(d) is determined directly from the corpus D, which is defined in terms of a DTM. In contrast, the probabilities P(t|d) and P(w|t) are parameters modelled as multinomial distributions and iteratively updated via the Expectation-Maximization (EM) algorithm. A direct parallel between LSA and pLSA follows from the symmetric reparameterization \(P(d,w)=\sum_t P(t)\,P(d\mid t)\,P(w\mid t)\): the topic-word matrix corresponds to P(w|t), the document-topic matrix to P(d|t), and the topic-prevalence matrix to P(t), as conveyed via the matching colours displayed in Figure and Figure, respectively.
Despite pLSA implicitly addressing several LSA-related disadvantages, the method retains two main drawbacks. Because there is no generative model for the document-topic probabilities P(t|d), topic mixtures cannot be assigned to new, unseen documents that the model was not trained on. Moreover, the number of model parameters grows linearly with the number of documents, making the method more susceptible to overfitting.
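Since gensim (used for the other models in this report) provides no pLSA implementation, the EM updates described above can be sketched directly in NumPy; the corpus dimensions, random counts, and fixed 50 iterations below are illustrative assumptions only.

# A bare-bones pLSA EM sketch in NumPy; all sizes are toy assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_docs, n_words, n_topics = 6, 50, 3
dtm = rng.integers(0, 5, size=(n_docs, n_words))  # toy count matrix n(d, w)

# Randomly initialise the multinomial parameters P(t|d) and P(w|t).
p_t_d = rng.random((n_docs, n_topics)); p_t_d /= p_t_d.sum(1, keepdims=True)
p_w_t = rng.random((n_topics, n_words)); p_w_t /= p_w_t.sum(1, keepdims=True)

for _ in range(50):
    # E-step: posterior P(t|d,w) proportional to P(t|d) * P(w|t), shape (d, w, t).
    joint = p_t_d[:, None, :] * p_w_t.T[None, :, :]
    post = joint / joint.sum(axis=2, keepdims=True)
    # M-step: re-estimate both parameter sets from expected counts n(d,w) * P(t|d,w).
    weighted = dtm[:, :, None] * post
    p_t_d = weighted.sum(axis=1); p_t_d /= p_t_d.sum(1, keepdims=True)
    p_w_t = weighted.sum(axis=0).T; p_w_t /= p_w_t.sum(1, keepdims=True)

Note how p_t_d has one row per training document: this is precisely the linear growth in parameters, and the absence of a prior over those rows, that the two drawbacks above refer to.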
Latent Dirichlet Allocation (LDA)
LDA (Blei, Ng, and Jordan 2003) is another generative, probabilistic model, which can be regarded as a hierarchical Bayesian extension of pLSA. By explicitly defining a generative model for the document-topic probabilities, both of the above-mentioned pitfalls of pLSA are addressed: the number of parameters to estimate decreases drastically, and the model can generalize to new, unseen documents. As presented in Figure, the first two steps involve randomly sampling a document-topic probability distribution (\(\theta\)) from a Dirichlet (Dir) distribution (\(\eta\)), followed by randomly sampling a topic-word probability distribution (\(\phi\)) from another Dirichlet distribution (\(\tau\)). From the \(\theta\) distribution, a topic t is selected by drawing from a multinomial (Mult) distribution (third step) and, given said topic t, a word w (or sentence) is sampled from the corresponding \(\phi\) distribution via another multinomial draw (fourth step). The associated LDA parameters are then estimated via a variational expectation-maximization algorithm or collapsed Gibbs sampling.
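As a concrete illustration of this generative scheme, the following gensim LdaModel sketch uses symmetric Dirichlet priors on both \(\theta\) and \(\phi\); the toy corpus, five topics, and ten passes are assumptions for demonstration, not the tuned values behind the results reported later.

# A minimal LDA sketch with gensim; priors and topic count are assumptions.
from gensim import corpora, models

speech_tokens = [
    ["government", "year", "development", "people"],
    ["energy", "programme", "national", "year"],
    ["alliance", "transitional", "constitution", "election"],
]
dictionary = corpora.Dictionary(speech_tokens)
bow_corpus = [dictionary.doc2bow(doc) for doc in speech_tokens]

# Variational EM over the Dirichlet-multinomial hierarchy described above:
# alpha governs the document-topic prior, eta the topic-word prior.
lda = models.LdaModel(
    bow_corpus, id2word=dictionary, num_topics=5,
    alpha="symmetric", eta="symmetric", passes=10, random_state=1,
)
print(lda.show_topics(num_topics=5, num_words=10))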
Read in the data
Exploratory Data Analysis
Sentiment analysis
Topic modelling
LSA
(0, '0.267*"year" + 0.242*"government" + 0.198*"work" + 0.195*"south" + 0.188*"people" + 0.163*"country" + 0.145*"development" + 0.142*"national" + 0.140*"programme" + 0.134*"african"')
(1, '-0.169*"government" + 0.146*"south" + -0.142*"regard" + 0.135*"year" + -0.134*"people" + 0.115*"energy" + 0.114*"000" + -0.113*"shall" + -0.112*"ensure" + -0.102*"question"')
(2, '0.140*"honourable" + 0.131*"programme" + -0.125*"pandemic" + 0.123*"continue" + -0.115*"new" + 0.110*"development" + 0.109*"rand" + -0.107*"great" + 0.106*"compatriot" + -0.102*"investment"')
(3, '0.337*"alliance" + 0.240*"transitional" + 0.204*"party" + 0.204*"constitution" + 0.156*"zulu" + 0.155*"constitutional" + 0.131*"south" + 0.126*"concern" + 0.125*"election" + 0.122*"freedom"')
(4, '-0.219*"shall" + 0.204*"people" + -0.148*"year" + 0.144*"alliance" + -0.130*"start" + 0.101*"government" + 0.097*"address" + 0.093*"transitional" + -0.088*"community" + -0.088*"citizen"')
pLSA (Probabilistic Latent Semantic Analysis)
[(0, '0.001*"year" + 0.001*"government" + 0.001*"work" + 0.001*"south" + 0.001*"people" + 0.001*"country" + 0.001*"development" + 0.001*"national" + 0.001*"programme" + 0.001*"continue"'),
 (1, '0.001*"year" + 0.000*"south" + 0.000*"government" + 0.000*"work" + 0.000*"country" + 0.000*"african" + 0.000*"people" + 0.000*"africa" + 0.000*"development" + 0.000*"programme"'),
 (2, '0.001*"government" + 0.001*"year" + 0.000*"people" + 0.000*"south" + 0.000*"work" + 0.000*"country" + 0.000*"ensure" + 0.000*"african" + 0.000*"programme" + 0.000*"service"'),
 (3, '0.001*"government" + 0.001*"year" + 0.001*"people" + 0.001*"south" + 0.001*"work" + 0.000*"african" + 0.000*"country" + 0.000*"national" + 0.000*"africa" + 0.000*"development"'),
 (4, '0.000*"year" + 0.000*"work" + 0.000*"government" + 0.000*"people" + 0.000*"development" + 0.000*"national" + 0.000*"country" + 0.000*"south" + 0.000*"african" + 0.000*"programme"')]
LDA (Latent Dirichlet Allocation)
[(0, '0.001*"year" + 0.001*"south" + 0.000*"government" + 0.000*"work" + 0.000*"people" + 0.000*"development" + 0.000*"african" + 0.000*"country" + 0.000*"national" + 0.000*"africa"'),
 (1, '0.001*"year" + 0.001*"people" + 0.001*"government" + 0.000*"country" + 0.000*"national" + 0.000*"development" + 0.000*"work" + 0.000*"african" + 0.000*"ensure" + 0.000*"south"'),
 (2, '0.001*"government" + 0.001*"year" + 0.001*"south" + 0.001*"people" + 0.001*"work" + 0.001*"country" + 0.001*"programme" + 0.001*"african" + 0.001*"development" + 0.001*"national"'),
 (3, '0.001*"government" + 0.001*"work" + 0.001*"year" + 0.001*"south" + 0.001*"people" + 0.001*"country" + 0.001*"development" + 0.001*"ensure" + 0.001*"programme" + 0.001*"national"'),
 (4, '0.000*"year" + 0.000*"south" + 0.000*"work" + 0.000*"people" + 0.000*"government" + 0.000*"national" + 0.000*"programme" + 0.000*"country" + 0.000*"africa" + 0.000*"african"')]